Federated learning has recently developed rapidly within machine learning, giving rise to a variety of research topics. Popular optimization algorithms are based on the frameworks of (stochastic) gradient descent methods or of the alternating direction method of multipliers. In this paper, we deploy an exact penalty method to handle federated learning and propose an algorithm, FedEPM, that is able to address four critical issues in federated learning: communication efficiency, computational complexity, the stragglers' effect, and data privacy. Moreover, it is proven to be convergent and shown to deliver high numerical performance.
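The penalty reformulation behind this family of methods can be sketched numerically. The sketch below is a generic illustration, not FedEPM itself: the quadratic toy objectives, the plain-averaging server step, and the subgradient step on the nonsmooth $\ell_2$ penalty are all simplifying assumptions.

```python
import numpy as np

# Toy local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2 (illustrative).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(4)]

# Penalty reformulation of the consensus problem
#   min_{z, x_i}  sum_i f_i(x_i) + lam * sum_i ||x_i - z||
# solved by alternating client subgradient steps with a server update
# (plain averaging here, a simplification of the exact z-minimization).
lam, lr, rounds = 5.0, 0.01, 200
z = np.zeros(5)
xs = [np.zeros(5) for _ in clients]

def penalized_obj(xs, z):
    return sum(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x - z)
               for (A, b), x in zip(clients, xs))

obj_init = penalized_obj(xs, z)
for _ in range(rounds):
    for i, (A, b) in enumerate(clients):
        d = xs[i] - z
        nd = np.linalg.norm(d)
        # Subgradient of lam * ||x_i - z|| (zero at the kink).
        sub = lam * d / nd if nd > 1e-12 else np.zeros(5)
        xs[i] = xs[i] - lr * (A.T @ (A @ xs[i] - b) + sub)
    z = np.mean(xs, axis=0)   # server aggregation
obj_final = penalized_obj(xs, z)
```

Driving the penalty weight high enough makes the penalized problem share its solution with the constrained consensus problem, which is what "exact penalty" refers to.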
The step function is one of the simplest and most natural activation functions for deep neural networks (DNNs). Since it evaluates to 1 for positive variables and 0 otherwise, its intrinsic characteristics (e.g., discontinuity and the absence of usable subgradient information) have hindered its development for decades. Even though there is impressive work on designing DNNs with continuous activation functions that can be viewed as surrogates of the step function, the step function still has certain advantageous properties, such as complete robustness to outliers and the ability to attain the best learning-theoretic guarantee of predictive accuracy. Therefore, in this paper, we aim to train DNNs that use the step function as the activation function (dubbed 0/1 DNNs). We first reformulate 0/1 DNNs as an unconstrained optimization problem and then solve it by a block coordinate descent (BCD) method. Moreover, we derive closed-form solutions for the subproblems of BCD and establish its convergence. Furthermore, we integrate $\ell_{2,0}$-regularization into 0/1 DNNs to accelerate the training process and compress the network scale. As a result, the proposed algorithm achieves high performance on classifying the MNIST and Fashion-MNIST datasets.
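The two ingredients named above, the 0/1 (step) activation and the $\ell_{2,0}$ regularizer, can be illustrated in isolation; the BCD updates and their closed-form solutions are specific to the paper and are not reproduced here. The weights below are random placeholders, not trained parameters.

```python
import numpy as np

def step(x):
    # 0/1 (Heaviside) activation: 1 for positive inputs, 0 otherwise.
    return (x > 0).astype(float)

def l20(W, tol=1e-12):
    # l_{2,0} "norm": the number of rows of W with nonzero Euclidean norm.
    # Penalizing it drives whole rows (i.e., whole neurons) to zero,
    # which compresses the network scale.
    return int(np.sum(np.linalg.norm(W, axis=1) > tol))

# Forward pass of a tiny 0/1 network with random placeholder weights.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))
W1[2] = 0.0                    # one neuron already pruned away
W2 = rng.normal(size=(1, 4))
x = rng.normal(size=3)
h = step(W1 @ x)               # hidden activations are exactly 0 or 1
y = step(W2 @ h)
```

The discontinuity is visible immediately: `h` carries no gradient information about `W1`, which is why surrogate-free training needs the BCD machinery the abstract describes.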
Federated learning has shown its advances recently but is still facing many challenges, such as how algorithms save communication resources and reduce computational costs, and whether they converge. To address these critical issues, we propose a hybrid federated learning algorithm (FedGiA) that combines the gradient descent and the inexact alternating direction method of multipliers. The proposed algorithm is more communication- and computation-efficient than several state-of-the-art algorithms theoretically and numerically. Moreover, it also converges globally under mild conditions.
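The hybrid idea, running a few gradient steps in place of the exact ADMM x-minimization, can be sketched on a toy quadratic consensus problem. This is a generic illustration under simplifying assumptions (quadratic local losses, synthetic data, hand-picked step sizes), not FedGiA itself.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.normal(size=5)
# Each client holds A_i and noisy measurements b_i = A_i x_true + noise.
clients = []
for _ in range(4):
    A = rng.normal(size=(30, 5))
    clients.append((A, A @ x_true + 0.1 * rng.normal(size=30)))

rho, lr, local_steps, rounds = 1.0, 0.01, 3, 300
z = np.zeros(5)
xs = [np.zeros(5) for _ in clients]
us = [np.zeros(5) for _ in clients]
for _ in range(rounds):
    for i, (A, b) in enumerate(clients):
        # Inexact x-update: a few gradient steps on the augmented Lagrangian
        # replace the exact minimization -- the gradient-descent/ADMM hybrid.
        for _ in range(local_steps):
            g = A.T @ (A @ xs[i] - b) + rho * (xs[i] - z + us[i])
            xs[i] = xs[i] - lr * g
    z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # server update
    us = [u + x - z for u, x in zip(us, xs)]              # dual updates
err = np.linalg.norm(z - x_true)
```

The communication saving is that each round exchanges only one vector per client, while the computation saving is that no client ever solves its subproblem exactly.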
One of the crucial issues in federated learning is how to develop efficient optimization algorithms. Most of the current ones require full device participation and/or impose strong assumptions for convergence. Different from the widely-used gradient descent-based algorithms, in this paper, we develop an inexact alternating direction method of multipliers (ADMM), which is both computation- and communication-efficient, capable of combating the stragglers' effect, and convergent under mild conditions. Furthermore, it has a high numerical performance compared with several state-of-the-art algorithms for federated learning.
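A generic consensus-ADMM sketch with partial device participation, the mechanism that lets such methods tolerate stragglers, is given below. The quadratic local objectives, synthetic data, and closed-form x-update are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.normal(size=5)
clients = []
for _ in range(10):
    A = rng.normal(size=(30, 5))
    clients.append((A, A @ x_true + 0.1 * rng.normal(size=30)))

rho, rounds, n_active = 1.0, 200, 5   # only half the devices respond per round
z = np.zeros(5)
xs = [np.zeros(5) for _ in clients]
us = [np.zeros(5) for _ in clients]
for _ in range(rounds):
    active = rng.choice(len(clients), size=n_active, replace=False)
    for i in active:
        A, b = clients[i]
        # Closed-form x-update for the quadratic f_i; stragglers (inactive
        # devices) simply keep their previous local variables this round.
        xs[i] = np.linalg.solve(A.T @ A + rho * np.eye(5),
                                A.T @ b + rho * (z - us[i]))
    z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
    for i in active:
        us[i] = us[i] + xs[i] - z
err = np.linalg.norm(z - x_true)
```

Because inactive devices are never waited on, a slow client delays nothing; its stale variables are folded back in the next time it participates.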
One of the fundamental limitations of deep neural networks (DNNs) is their inability to acquire and accumulate new cognitive capabilities. When new data appear, such as object classes outside the prescribed recognition set, a conventional DNN cannot recognize them due to the fundamental formulation it relies on. The current solution is typically to redesign the whole network and relearn it from a newly expanded dataset, or to build a new configuration to accommodate the new knowledge. This process is quite different from that of a human learner. In this paper, we propose a new learning method, named Accretionary Learning (AL), to emulate human learning in that the set of objects to be recognized need not be fixed in advance. The corresponding learning structure is modularized and can dynamically expand to register and use new knowledge. During accretionary learning, the learning process does not require the system to be totally redesigned and retrained as the set of objects grows. The proposed DNN structure does not forget previous knowledge when learning to recognize new data classes. We show that the new structure and design methodology lead to a system that can grow to cope with increased cognitive complexity while providing stable and superior overall performance.
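The modular, dynamically expandable structure can be mimicked with a deliberately simple stand-in: a nearest-class-mean classifier in which each class is one "module", and registering a new class leaves every existing module untouched. This illustrates only the accretion principle, not the paper's DNN architecture.

```python
import numpy as np

class AccretiveClassifier:
    """Nearest-class-mean classifier as a stand-in for a modular network:
    each class owns its own 'module' (here, a prototype vector), and a new
    class is registered without retraining or modifying existing ones."""
    def __init__(self):
        self.prototypes = {}   # label -> mean feature vector

    def register(self, label, features):
        # Accretion step: adding a class only creates one new module.
        self.prototypes[label] = np.mean(features, axis=0)

    def predict(self, x):
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(x - self.prototypes[c]))

rng = np.random.default_rng(0)
clf = AccretiveClassifier()
clf.register("cat", rng.normal(loc=0.0, size=(50, 8)))
clf.register("dog", rng.normal(loc=3.0, size=(50, 8)))
old_cat = clf.prototypes["cat"].copy()
clf.register("bird", rng.normal(loc=-3.0, size=(50, 8)))   # new class arrives
```

After the "bird" module is added, the "cat" prototype is bit-for-bit unchanged: nothing was redesigned or retrained, and nothing previously learned was forgotten.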
Deep learning methods have been shown to be effective in representing ground-state wave functions of quantum many-body systems. Existing methods use convolutional neural networks (CNNs) for square lattices due to their image-like structure. For non-square lattices, existing methods use graph neural networks (GNNs), in which structural information is not precisely captured, thereby requiring additional hand-crafted sublattice encodings. In this work, we propose lattice convolutions, in which a set of proposed operations converts non-square lattices into grid-like augmented lattices on which regular convolution can be applied. Based on the proposed lattice convolutions, we design lattice convolutional networks (LCNs) that use self-gating and attention mechanisms. Experimental results show that our method performs on par with or better than existing methods on the spin-1/2 $J_1$-$J_2$ Heisenberg model over square, honeycomb, triangular, and kagome lattices, without using hand-crafted encodings.
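The core construction, embedding a non-square lattice into a grid-like augmented lattice so that a regular convolution applies, can be sketched as follows. The brick-wall embedding of a honeycomb row pair used here is an illustrative assumption, not the paper's exact set of operations.

```python
import numpy as np

def conv2d(grid, kernel):
    # Plain 'valid' 2D cross-correlation, written out for clarity.
    kh, kw = kernel.shape
    H, W = grid.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(grid[i:i + kh, j:j + kw] * kernel)
    return out

# Honeycomb sites in a "brick wall" embedding: the two sublattice rows map
# to interleaved grid columns, and missing positions are zero-filled so a
# regular convolution can slide over the augmented lattice.
sites = np.arange(1.0, 9.0).reshape(2, 4)   # 8 site values (illustrative)
grid = np.zeros((2, 8))
grid[0, 0::2] = sites[0]   # sublattice A on even columns of row 0
grid[1, 1::2] = sites[1]   # sublattice B on odd columns of row 1
out = conv2d(grid, np.ones((2, 2)))
```

Because the zero positions carry no site values, each kernel application aggregates exactly the embedded neighbors, so no hand-crafted sublattice encoding is needed to tell them apart.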
Super-resolution algorithms based on deep neural networks (DNNs) have greatly improved the quality of generated images. However, these algorithms often yield significant artifacts when handling real-world super-resolution problems, due to the difficulty of learning misaligned optical zoom. In this paper, we introduce a Squared Deformable Alignment Network (SDAN) to address this issue. Our network learns squared per-point offsets for convolutional kernels and then aligns features in corrected convolutional windows based on those offsets. The misalignment is thereby minimized by the extracted aligned features. Different from the per-point offsets used in the vanilla deformable convolutional network (DCN), our proposed squared offsets not only accelerate offset learning but also improve generation quality with fewer parameters. Moreover, we propose an efficient cross-packing attention layer to boost the accuracy of the learned offsets. It leverages packing and unpacking operations to enlarge the receptive field of offset learning and to enhance the ability to extract spatial connections between low-resolution images and reference images. Comprehensive experiments show the superiority of our method over other state-of-the-art methods in both computational efficiency and realistic details.
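Offset-guided feature alignment, the mechanism SDAN builds on, can be sketched with integer offsets and nearest-neighbor gathering. Real deformable alignment predicts fractional offsets and samples bilinearly, and the "squared" offset parameterization is the paper's contribution, not reproduced here.

```python
import numpy as np

def sample_with_offsets(feat, offsets):
    """Gather feat[y+dy, x+dx] for every position, clamping at the border.
    In a deformable convolution the offsets are predicted by a small network
    and sampling is bilinear; integer offsets keep this sketch short."""
    H, W = feat.shape
    out = np.zeros_like(feat)
    for y in range(H):
        for x in range(W):
            dy, dx = offsets[y, x]
            yy = min(max(y + dy, 0), H - 1)
            xx = min(max(x + dx, 0), W - 1)
            out[y, x] = feat[yy, xx]
    return out

feat = np.arange(16.0).reshape(4, 4)
# A uniform shift of (+1, +1) stands in for a learned offset field: it models
# a constant misalignment between the zoomed view and the reference.
offsets = np.full((4, 4, 2), 1, dtype=int)
aligned = sample_with_offsets(feat, offsets)
```

Convolving on `aligned` instead of `feat` is what removes the misalignment before feature fusion; SDAN's squared offsets reduce how many such shift parameters must be learned.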
Benefiting from its capability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms from further improvement. 1) The quality of positive samples heavily depends on carefully designed data augmentations, while inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
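The sample-construction idea can be sketched numerically: cross-view pairs from the same high-confidence cluster act as positives, and the centers of the *other* clusters act as negatives. The embeddings below are random placeholders, not outputs of the Siamese encoders.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(0)
# Two views of six nodes (illustrative; in CCGC these come from unshared
# sibling encoders), plus a high-confidence cluster assignment.
view1 = rng.normal(size=(6, 4))
view2 = view1 + 0.1 * rng.normal(size=(6, 4))
labels = np.array([0, 0, 0, 1, 1, 1])

# Cluster centers (computed on one view) serve as negative samples.
centers = np.stack([view2[labels == c].mean(axis=0) for c in (0, 1)])

# Pull cross-view positives from the same cluster together; push each
# sample away from the centers of the other clusters.
pos = np.mean([cosine(view1[i], view2[j])
               for i in range(6) for j in range(6)
               if labels[i] == labels[j]])
neg = np.mean([cosine(view1[i], centers[c])
               for i in range(6) for c in (0, 1) if c != labels[i]])
loss = neg - pos   # minimized as positives align and negatives separate
```

Using cluster centers rather than arbitrary other nodes as negatives is what keeps same-cluster nodes from being wrongly pushed apart.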
As one of the prevalent methods for achieving automation systems, Imitation Learning (IL) presents promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explaining framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomized masked demonstrations and uses the conventional evaluation outcome, the environment return, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions concerning frames' importance equality, the effectiveness of the importance map, and connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
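The masking-and-weighting loop is RISE-style and can be sketched directly. Here a toy return function stands in for retraining and evaluating the IL model, which is the expensive step in R2RISE; the masking, return-weighted accumulation, and normalization are generic.

```python
import numpy as np

def rise_importance(return_fn, n_frames, n_masks=200, p_keep=0.5, seed=0):
    """RISE-style attribution over demonstration frames: sample random
    binary masks over frames, evaluate the (here: simulated) retrained
    policy's return for each masked demonstration, and accumulate the
    masks weighted by those returns."""
    rng = np.random.default_rng(seed)
    importance = np.zeros(n_frames)
    total = 0.0
    for _ in range(n_masks):
        mask = rng.random(n_frames) < p_keep   # which frames survive
        ret = return_fn(mask)                  # stand-in for retrain+evaluate
        importance += ret * mask
        total += ret
    return importance / max(total, 1e-12)

# Toy ground truth: only frames 2 and 5 matter for the return.
def toy_return(mask):
    return float(mask[2]) + float(mask[5])

imp = rise_importance(toy_return, n_frames=8)
top2 = set(np.argsort(imp)[-2:])
```

Frames whose presence correlates with high returns accumulate the most weight, so the importance map recovers exactly the frames the policy depends on.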
Text clustering and topic extraction are two important tasks in text mining. Usually, these two tasks are performed separately. For topic extraction to facilitate clustering, we can first project texts into a topic space and then run a clustering algorithm to obtain clusters. For clustering to promote topic extraction, we can first obtain clusters with a clustering algorithm and then extract cluster-specific topics. However, this naive strategy ignores the fact that text clustering and topic extraction are strongly correlated and follow a chicken-and-egg relationship: performing them separately fails to make them mutually benefit each other toward the best overall performance. In this paper, we propose an unsupervised text clustering and topic extraction framework (ClusTop) that integrates the two tasks into a unified framework and can achieve high-quality clustering results while simultaneously extracting topics from each cluster. Our framework includes four components: enhanced language model training, dimensionality reduction, clustering, and topic extraction, where the enhanced language model can be viewed as a bridge between clustering and topic extraction. On one hand, it provides text embeddings with a strong cluster structure, which facilitates effective text clustering; on the other hand, it pays close attention to topic-related words for topic extraction because of its self-attention architecture. Moreover, the training of the enhanced language model is unsupervised. Experiments on two datasets demonstrate the effectiveness of our framework and provide benchmarks for different model combinations within it.
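The pipeline (embed, cluster, then extract cluster-specific topics) can be sketched end to end with deliberately simple stand-ins: bag-of-words vectors in place of the enhanced language model, fixed-init 2-means in place of the clustering component, and per-cluster word frequency in place of attention-based topic extraction.

```python
import numpy as np
from collections import Counter

docs = [
    "stocks market trading shares equity",
    "market shares investors stocks fund",
    "neural network training deep learning",
    "deep learning model neural training",
]

# Stage 1: embeddings (bag-of-words stands in for the language model).
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)

# Stage 2: clustering (2-means with fixed initial centers, for determinism).
centers = X[[0, 2]].copy()
for _ in range(10):
    labels = np.argmin([[np.linalg.norm(x - c) for c in centers] for x in X],
                       axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])

# Stage 3: cluster-specific topics = most frequent words per cluster.
topics = {}
for k in (0, 1):
    words = Counter(w for d, l in zip(docs, labels) if l == k for w in d.split())
    topics[k] = [w for w, _ in words.most_common(3)]
```

The point of ClusTop is precisely that these stages should not be this independent: the shared language model couples the clustering objective to the topic words, which the frequency-count stand-in here cannot do.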